4 research outputs found

    An efficient hardware architecture for a neural network activation function generator

    This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable to numerous activation functions. This is achieved by using a minimax polynomial and by optimally placing the approximating polynomials based on the results of a genetic algorithm. The approximation error of the proposed method compares favourably with all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.
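
    A minimal sketch of the piecewise-polynomial idea described in this abstract is given below. It is illustrative only: the paper fits minimax polynomials and places segment boundaries with a genetic algorithm, whereas this sketch substitutes ordinary least-squares fits (numpy.polyfit) on uniform segments, and the function names (fit_segments, evaluate), segment count, and degree are assumptions, not the paper's design.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fit_segments(f, lo, hi, n_segments=4, degree=2, samples=256):
        """Fit one low-degree polynomial per segment.
        Uniform segments and least-squares fits stand in for the paper's
        GA-placed segments and minimax polynomials."""
        bounds = np.linspace(lo, hi, n_segments + 1)
        coeffs = []
        for a, b in zip(bounds[:-1], bounds[1:]):
            xs = np.linspace(a, b, samples)
            coeffs.append(np.polyfit(xs, f(xs), degree))
        return bounds, coeffs

    def evaluate(x, bounds, coeffs):
        """Evaluate the piecewise approximation at scalar x."""
        i = np.clip(np.searchsorted(bounds, x) - 1, 0, len(coeffs) - 1)
        return np.polyval(coeffs[i], x)

    bounds, coeffs = fit_segments(sigmoid, -8.0, 8.0)
    xs = np.linspace(-8.0, 8.0, 1001)
    err = max(abs(evaluate(x, bounds, coeffs) - sigmoid(x)) for x in xs)
    print(f"max abs error over [-8, 8]: {err:.2e}")
    ```

    Increasing n_segments or degree trades silicon area (more stored coefficients, wider multipliers) for accuracy, which is the trade-off the paper's GA-based placement optimises.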

    Digital hardware implementation of a neural system used for nonlinear adaptive prediction

    Neural networks have been widely used for many applications in digital communications. They can solve complex problems thanks to their nonlinear processing and their learning and generalization capabilities. Neural networks are one of the key technologies for the communication domain, so particular effort can be expected to go into real-time hardware implementation issues. This study proposes a digital hardware implementation of a neural system based on a multilayer perceptron (MLP). The neural system is used for the nonlinear adaptive prediction of nonstationary signals such as speech. The MLP architecture is generated from a generic elementary neuron (EN). A polynomial approximation method is used to implement the sigmoidal activation function, and the back-propagation algorithm is used to train the network for the prediction task. The circuit architecture is detailed with a view to achieving real-time prediction of speech signals. The designed ASIC includes a neural network block, an on-chip learning block and a memory that stores the synaptic weights for updating.
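
    A hedged software model of the elementary-neuron idea from this abstract follows: a multiply-accumulate over inputs and synaptic weights, then a piecewise-polynomial sigmoid. The segment boundaries, coefficients (from a quick least-squares fit), and helper names (sigmoid_poly, elementary_neuron) are assumptions for illustration; the paper's fixed-point formats, actual polynomial coefficients, and on-chip back-propagation logic are not reproduced.

    ```python
    import numpy as np

    def _fit(lo, hi, degree=2, samples=128):
        """Offline least-squares fit of the sigmoid on [lo, hi]; an
        illustrative stand-in for the paper's polynomial approximation."""
        xs = np.linspace(lo, hi, samples)
        return np.polyfit(xs, 1.0 / (1.0 + np.exp(-xs)), degree)

    NEG, POS = _fit(-4.0, 0.0), _fit(0.0, 4.0)

    def sigmoid_poly(a):
        """Piecewise-polynomial sigmoid with hard saturation outside [-4, 4],
        mimicking a hardware activation-function generator."""
        if a <= -4.0:
            return 0.0
        if a >= 4.0:
            return 1.0
        return float(np.polyval(NEG if a < 0.0 else POS, a))

    def elementary_neuron(x, w, b):
        """Generic neuron: multiply-accumulate over inputs, then activation."""
        return sigmoid_poly(float(np.dot(x, w) + b))

    x = np.array([0.5, -1.2, 0.3])
    w = np.array([0.8, 0.1, -0.4])
    print(elementary_neuron(x, w, 0.2))  # one forward pass through one EN
    ```

    In the hardware described, one such EN is replicated to form the MLP, and the stored weights the neuron reads are the ones the on-chip learning block updates via back-propagation.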